Write Test Plans That Catch Bugs Before Users Do — QA Engineering
By the end of this page, you will understand how QA Engineers write and execute test plans using pytest and vitest — and how AI agents can generate comprehensive test suites automatically.
Testing & Verification — The 2-Minute Overview
Think about the last time you bought a car. You didn't see the thousands of crash tests, brake tests, engine stress tests, and safety inspections behind that vehicle. You just turned the key and drove. But somebody had to verify that every part works, every edge case is handled, and every safety standard is met — before it reached the showroom. That verification is QA Engineering. The sections below map that verification process, zoomed out.
You Already Know QA — You Just Don't Know It Yet
You've been doing QA every time you proofread an important email before hitting send. Let's prove it.
Imagine you're sending a job application email to your dream company:
✉️ The Email Proofreading Analogy
Step 1 — You check structure: Subject line present? Greeting correct? Attachment included?
🔗 QA Layer: ① UNIT TESTING — Test individual components in isolation. Does each function return the correct output?
Step 2 — You check flow: Does paragraph 1 connect to paragraph 2? Does the closing match the tone?
🔗 QA Layer: ② INTEGRATION TESTING — Test component interactions. Do modules work together correctly?
Step 3 — You check edge cases: What if they open it on mobile? What if the attachment is too large? What if the link is broken?
🔗 QA Layer: ③ EDGE-CASE TESTING — Test boundaries, errors, and unusual scenarios.
Step 4 — You read it as the recipient: Does it make sense to someone who doesn't know your context?
🔗 QA Layer: ④ ACCEPTANCE TESTING — Does the product meet the user's acceptance criteria?
The Complete Mapping
| Email Proofreading | QA Engineering | Level |
|---|---|---|
| Check subject, greeting, attachment | Test each function individually | ① Unit Tests |
| Check paragraph flow and coherence | Test module interactions | ② Integration Tests |
| Check mobile rendering, link validity | Test edge cases and boundaries | ③ Edge-Case Tests |
| Read as the recipient | Validate against acceptance criteria | ④ Acceptance Tests |
You just learned QA without writing a single test.
The 5 Pillars of QA Engineering
1. Test Strategy
Not all tests are equal. The strategy defines what to test, at what level, with what priority.
The testing pyramid: many unit tests (fast, cheap), fewer integration tests (moderate), few E2E tests (slow, expensive). The QA Engineer maps each acceptance criterion to a test level and sets coverage targets.
| Level | What It Tests | Speed | Cost | Quantity |
|---|---|---|---|---|
| Unit | Individual functions | Fast (ms) | Low | Many (hundreds) |
| Integration | Module interactions, API calls | Medium (seconds) | Medium | Moderate (dozens) |
| E2E | Full user journeys | Slow (minutes) | High | Few (handful) |
2. pytest (Backend Testing)
pytest is the Python testing standard — simple, powerful, and extensible.
Fixtures for setup/teardown. Parametrize for testing multiple inputs. Markers for categorizing tests. Coverage reports for identifying gaps.
| Concept | What It Means | When to Use |
|---|---|---|
| Fixtures | Reusable setup/teardown for tests | Database connections, mock data |
| Parametrize | Run same test with different inputs | Validating multiple scenarios |
| Markers | Tag tests (slow, integration, smoke) | Selective test execution |
| Coverage | Measure % of code executed by tests | Gap identification |
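The four concepts above fit in a single self-contained pytest file. This is a minimal sketch: the `is_strong_password` helper and the dict-based `db` fixture are hypothetical stand-ins, not part of any real project.

```python
import pytest

def is_strong_password(pw: str) -> bool:
    """Toy rule: at least 8 chars, one digit, one uppercase letter."""
    return (len(pw) >= 8
            and any(c.isdigit() for c in pw)
            and any(c.isupper() for c in pw))

@pytest.fixture
def db():
    """Fixture: setup runs before the test, teardown after the yield."""
    conn = {"users": []}      # stand-in for a real DB connection
    yield conn
    conn["users"].clear()     # teardown

@pytest.mark.parametrize("pw,expected", [
    ("Secret123", True),      # meets all rules
    ("short1A", False),       # too short
    ("nodigitsHere", False),  # missing digit
])
def test_password_strength(pw, expected):
    assert is_strong_password(pw) == expected

@pytest.mark.slow  # marker: select with `pytest -m slow`
def test_bulk_insert(db):
    db["users"].extend(f"user{i}" for i in range(1000))
    assert len(db["users"]) == 1000
```

Run `pytest -m slow` to execute only the tagged tests, or `pytest --cov` (with the pytest-cov plugin installed) for a coverage report. Custom markers like `slow` should be registered in `pytest.ini` so pytest doesn't warn about unknown marks.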
3. vitest (Frontend Testing)
vitest is the Vite-native testing framework — fast, modern, and compatible with Jest.
Component testing for React components. Snapshot testing for UI regression. Mock testing for API calls. DOM testing for user interactions.
| Concept | What It Means | When to Use |
|---|---|---|
| Component Tests | Test React components in isolation | Every UI component |
| Snapshots | Capture and compare rendered output | UI regression detection |
| Mocks | Simulate API responses | Tests without backend dependency |
| DOM Testing | Simulate user clicks and inputs | User interaction flows |
4. Definition of Done
Code isn't "done" when it compiles. It's "done" when all tests pass, coverage meets target, and acceptance criteria are verified.
The Definition of Done is a checklist: all unit tests pass, integration tests pass, coverage ≥ 80%, no critical defects, acceptance criteria verified, code reviewed, and merged to DEV.
| Criterion | What It Means | Verified By |
|---|---|---|
| All Tests Pass | Zero failures across all test levels | CI/CD pipeline |
| Coverage Target Met | ≥80% line coverage | Coverage report |
| No Critical Defects | Zero P0/P1 bugs open | Defect log |
| Acceptance Criteria Met | Every AC verified with a test | Test-to-AC mapping |
5. Defect Reporting & Triage
A bug report without reproduction steps is noise. A bug report with root cause is a gift.
Good defect reports include: summary, steps to reproduce, expected vs. actual behavior, severity, and root cause hypothesis. Triage categorizes by severity and assigns to the right developer.
| Severity | Definition | Response Time |
|---|---|---|
| P0 — Critical | System unusable, data loss risk | Immediate — stop current work |
| P1 — High | Major feature broken, workaround exists | Within sprint |
| P2 — Medium | Minor feature broken, low impact | Next sprint |
| P3 — Low | Cosmetic, no functional impact | Backlog |
The 5 Pillars at a Glance
| # | Pillar | What It Answers | Key Decision |
|---|---|---|---|
| ① | Test Strategy | What to test and at what level? | Pyramid: unit > integration > E2E |
| ② | pytest | How to test the backend? | Fixtures, parametrize, coverage |
| ③ | vitest | How to test the frontend? | Components, snapshots, mocks |
| ④ | Definition of Done | When is code truly "done"? | Tests + coverage + AC verification |
| ⑤ | Defect Reporting | How to report and triage bugs? | Severity + reproduction + root cause |
That's it. Master these 5 pillars, master QA.
Try It Yourself — A Starter Prompt for Test Plan Generation
This prompt gives you a working starting point. For the complete prompt — with test-to-AC mapping, coverage gap analysis, and defect triage workflows — see the full course chapter →.
You are a Senior QA Engineer with experience in pytest and vitest.
I need a test plan for:
{{PASTE YOUR FEATURE CODE OR REQUIREMENTS}}
Cover these 5 areas:
1. TEST STRATEGY — Map each requirement to a test level (unit, integration, e2e).
2. BACKEND TESTS — Write pytest test cases for every backend function. Include edge cases.
3. FRONTEND TESTS — Write vitest test cases for every UI component. Include user interaction tests.
4. COVERAGE TARGETS — Set coverage targets per module and identify potential gaps.
5. DEFINITION OF DONE — Define the checklist for "done" including test pass rates and coverage thresholds.
For each area, provide: the test cases and a brief justification.
Format as a structured document with tables where appropriate.
What This Prompt Covers vs. What It Misses
| Skill | Lite Prompt (Free) | Full Prompt (Course) | Impact of Missing It |
|---|---|---|---|
| Test strategy mapping | ✅ Covered | ✅ Covered | — |
| pytest + vitest tests | ✅ Covered | ✅ Covered | — |
| Coverage targets | ✅ Covered | ✅ Covered | — |
| Test-to-acceptance-criteria mapping | ❌ Missing | ✅ Every AC linked to specific tests | AC exists but no test verifies it — discovered in UAT |
| Negative test cases | ⚠️ "Edge cases" | ✅ Explicit negative paths: invalid input, auth failure, concurrency | Happy path tested thoroughly — first bad input crashes the system |
| Test data strategy | ❌ Missing | ✅ Fixtures, factories, seed data design | Tests use hardcoded data that breaks when schema changes |
| Flaky test detection | ❌ Missing | ✅ Retry policies and environment isolation | 5% of tests fail randomly — team ignores all test failures. Trust erodes. |
| Performance test hooks | ❌ Missing | ✅ Response time assertions within functional tests | Feature works correctly but takes 8 seconds — no test caught the latency |
The Lite Prompt gets you to ~60% quality. Good enough to have a test plan. Not good enough to catch the bugs that reach production.
Real-World Example: Test Plan for a User Registration Feature
The Requirement
"Test a user registration feature: email validation, password strength check, duplicate account detection, welcome email trigger, and database record creation."
Lite Prompt Output — High-Level Test Plan
① Test Strategy
Unit tests for email validation and password check. Integration test for registration flow. E2E test for full signup journey.
② Backend Tests (pytest)
test_valid_email, test_invalid_email, test_strong_password, test_weak_password, test_duplicate_email, test_registration_creates_user.
③ Frontend Tests (vitest)
test_form_renders, test_submit_with_valid_data, test_error_on_invalid_email.
④ Coverage Target
80% line coverage for registration module.
⑤ Definition of Done
All tests pass. 80% coverage. No P0 bugs.
What a QA Lead Would Catch
| Area | Lite Output Says | What's Missing | Real-World Consequence |
|---|---|---|---|
| Strategy | "Unit + Integration + E2E" | No test priority. Which tests run first in CI? Which block deployment? | CI takes 20 minutes. Critical unit test failure hidden behind slow E2E suite. Bug discovered late. |
| Backend | "test_duplicate_email" | No concurrency test. What if 2 users register with same email simultaneously? | Race condition: both registrations succeed. Two accounts with same email. Data integrity broken. |
| Frontend | "test_error_on_invalid_email" | No test for password strength indicator. No test for form state after error. | User types weak password, no visual feedback. User submits, gets server error. Bad UX. |
| Coverage | "80% coverage" | Which 20% is uncovered? Is it error handling code? That's the most critical 20%. | Coverage reports "80% ✅" but the uncovered code is the exception handlers. First real error = unhandled crash. |
| DoD | "All tests pass, 80% coverage, no P0" | No AC-to-test mapping. Is "welcome email sent" tested? | Registration works but welcome email never fires. Nobody noticed because no test checks it. |
The pattern: The Lite Prompt asks "what are the test cases?" The full course asks "what are the test cases, what do they miss, and what bug escapes to production?"
Ready to Write Test Plans That Catch Everything?
- ✅ The complete prompt with AC mapping, negative test generation, and flaky test detection
- ✅ An AI agent that writes and executes test plans automatically
- ✅ Assessment + coding challenges to verify you can test, not just describe tests
Go from "I write tests" to "I write test suites that catch bugs before users do."